Neural linear models (NLMs) are deep Bayesian models that produce predictive uncertainty by learning features from the data and then performing Bayesian linear regression over those features. Despite their popularity, few works have focused on methodically evaluating the predictive uncertainties of these models. In this work, we demonstrate that traditional training procedures for NLMs drastically underestimate uncertainty on out-of-distribution inputs, and that NLMs therefore cannot be naively deployed in risk-sensitive applications. We identify the underlying reasons for this behavior and propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
translated by Google Translate (谷歌翻译)
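As a rough illustration of the setup this abstract describes, the sketch below performs closed-form Bayesian linear regression over a fixed feature map standing in for a trained network's last hidden layer. All names, hyperparameters, and the toy feature map are illustrative assumptions, not the authors' implementation. Because the saturating features map far-away points to nearly identical feature vectors, the predictive variance barely grows off-distribution, which is exactly the kind of underestimated uncertainty the abstract points to.

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=25.0):
    """Bayesian linear regression over features Phi.
    alpha: prior precision on the weights; beta: observation-noise precision.
    Returns the posterior mean and covariance of the weights."""
    d = Phi.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)  # posterior covariance
    m = beta * S @ Phi.T @ y                                   # posterior mean
    return m, S

def predictive_variance(phi, S, beta=25.0):
    """Predictive variance at a single feature vector phi."""
    return 1.0 / beta + phi @ S @ phi

# Saturating "learned" features standing in for the network's last layer.
def features(x):
    return np.stack([np.tanh(x), np.tanh(2 * x), np.tanh(x - 1)], axis=-1)

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=200)
y_train = np.sin(x_train) + 0.2 * rng.standard_normal(200)

m, S = blr_posterior(features(x_train), y_train)
# tanh saturates, so points far outside the data map to nearly identical
# features: the model reports almost the same uncertainty at x = 10 and
# x = 100, i.e. uncertainty fails to grow out of distribution.
v10 = predictive_variance(features(10.0), S)
v100 = predictive_variance(features(100.0), S)
```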
The lack of any sender authentication mechanism makes CAN (Controller Area Network) vulnerable to security threats. For instance, an attacker can impersonate an ECU (Electronic Control Unit) on the bus and unobtrusively send spoofed messages with the identifier of the impersonated ECU. To address the insecure nature of the system, this thesis demonstrates a sender authentication technique that uses power consumption measurements of the ECUs and a classification model to determine their transmitting states. Evaluation of the method in real-world settings shows that the technique applies across a broad range of operating conditions and achieves good accuracy. A key challenge of machine-learning-based security controls is the potential for false positives. A false-positive alert may induce panic in operators, lead to incorrect reactions, and, in the long run, cause alarm fatigue. For reliable decision-making in such circumstances, knowing the cause of unusual model behavior is essential. However, the black-box nature of these models makes them uninterpretable. Therefore, another contribution of this thesis explores explanation techniques for image and time-series inputs that (1) assign weights to individual inputs based on their sensitivity toward the target class, and (2) quantify the variations in the explanation by reconstructing the sensitive regions of the inputs using a generative model. In summary, this thesis (https://uwspace.uwaterloo.ca/handle/10012/18134) presents methods for addressing security and interpretability in automotive systems, which can also be applied in other settings where safe, transparent, and reliable decision-making is crucial.
Deep latent variable models have achieved significant empirical success in model-based reinforcement learning (RL) due to their expressiveness in modeling complex transition dynamics. On the other hand, it remains unclear, theoretically and empirically, how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of RL. In this paper, we provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle in the face of uncertainty for exploration. In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models. Theoretically, we establish the sample complexity of the proposed approach in both the online and offline settings. Empirically, we demonstrate superior performance over current state-of-the-art algorithms across various benchmarks.
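The optimism principle mentioned in the abstract is commonly implemented with an elliptical exploration bonus over a learned feature (kernel) embedding. The sketch below is a generic illustration of that idea under assumed names, not the paper's actual algorithm: the bonus is large in feature directions the agent has rarely observed and shrinks as observations in a direction accumulate.

```python
import numpy as np

def ucb_bonus(phi, lam_mat, beta_coef=1.0):
    """Elliptical exploration bonus: beta * sqrt(phi^T Lambda^{-1} phi)."""
    return beta_coef * np.sqrt(phi @ np.linalg.solve(lam_mat, phi))

d = 4
lam_mat = np.eye(d)                 # regularized feature covariance
rng = np.random.default_rng(1)
for _ in range(50):                 # accumulate observed feature directions
    phi = rng.standard_normal(d)
    lam_mat += np.outer(phi, phi)

phi_test = rng.standard_normal(d)
b_before = ucb_bonus(phi_test, np.eye(d))  # bonus with no data
b_after = ucb_bonus(phi_test, lam_mat)     # bonus after 50 observations
```

Adding this bonus to the estimated value of each action makes the planner prefer under-explored directions first, which is the UCB-style exploration the abstract refers to.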
Warning: this paper contains content that may be offensive or upsetting. In the current context where online platforms have been effectively weaponized in a variety of geo-political events and social issues, Internet memes make fair content moderation at scale even more difficult. Existing work on meme classification and tracking has focused on black-box methods that do not explicitly consider the semantics of the memes or the context of their creation. In this paper, we pursue a modular and explainable architecture for Internet meme understanding. We design and implement multimodal classification methods that perform example- and prototype-based reasoning over training cases, while leveraging both textual and visual SOTA models to represent the individual cases. We study the relevance of our modular and explainable models in detecting harmful memes on two existing tasks: Hate Speech Detection and Misogyny Classification. We compare the performance between example- and prototype-based methods, and between text, vision, and multimodal models, across different categories of harmfulness (e.g., stereotype and objectification). We devise a user-friendly interface that facilitates the comparative analysis of examples retrieved by all of our models for any given meme, informing the community about the strengths and limitations of these explainable methods.
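Example-based reasoning of the kind this abstract describes can be sketched as nearest-neighbour retrieval in a shared embedding space, where the retrieved training cases double as the explanation shown to the user. The embeddings, names, and toy data below are hypothetical stand-ins for the text/vision encoders the paper uses; this is a sketch of the general technique, not the authors' system.

```python
import numpy as np

def predict_by_examples(query_emb, train_embs, train_labels, k=5):
    """Example-based reasoning: label a query meme by majority vote over
    its k nearest training cases under cosine similarity; the retrieved
    cases serve as the explanation."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = normalize(train_embs) @ normalize(query_emb)
    nearest = np.argsort(-sims)[:k]
    votes = train_labels[nearest]
    return np.bincount(votes).argmax(), nearest

# Toy multimodal embeddings: two well-separated clusters.
rng = np.random.default_rng(5)
harmful = rng.standard_normal((20, 8)) + 3.0
benign = rng.standard_normal((20, 8)) - 3.0
train_embs = np.vstack([harmful, benign])
train_labels = np.array([1] * 20 + [0] * 20)

query = rng.standard_normal(8) + 3.0   # embedding of an unseen meme
label, evidence = predict_by_examples(query, train_embs, train_labels)
```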
Collecting sufficient labeled data for spoken language understanding (SLU) is expensive and time-consuming. Recent studies have achieved promising results by using pre-trained models in low-resource scenarios. Inspired by this, we ask: which (if any) pre-training strategies can improve performance across SLU benchmarks? To answer this question, we employ four types of pre-trained models and their combinations for SLU. We leverage self-supervised speech and language models (LMs) pre-trained on large quantities of unpaired data to extract strong speech and text representations. We also explore using supervised models pre-trained on larger external automatic speech recognition (ASR) or SLU corpora. We conduct extensive experiments on the SLU Evaluation (SLUE) benchmark and observe that self-supervised pre-trained models are more powerful, with pre-trained LMs and speech models being most beneficial for the Sentiment Analysis and Named Entity Recognition tasks, respectively.
Observational studies have recently received significant attention from the machine learning community due to the increasing availability of non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small or less representative sample sizes. In observational studies, de-confounding is a fundamental problem for individualised treatment effect (ITE) estimation. This paper proposes disentangled representations with adversarial training to selectively balance the confounders in the binary treatment setting for ITE estimation. The adversarial training of the treatment policy selectively encourages treatment-agnostic balanced representations for the confounders and helps to estimate the ITE in observational studies via counterfactual inference. Empirical results on synthetic and real-world datasets, with varying degrees of confounding, show that our proposed approach outperforms state-of-the-art methods, achieving lower error in ITE estimation.
We address the problem of efficient 3-D exploration in indoor environments for micro aerial vehicles with limited sensing capabilities and payload/power constraints. We develop an indoor exploration framework that uses learning to predict the occupancy of unseen areas, extracts semantic features, samples viewpoints to predict information gains for different exploration goals, and plans informative trajectories to enable safe and smart exploration. Extensive experiments in simulated and real-world environments show that the proposed approach outperforms state-of-the-art exploration frameworks in terms of total path length in structured indoor environments, and achieves a higher success rate during exploration.
In learning-to-rank problems, a privileged feature is one that is available during model training but not available at test time. Such features naturally arise in merchandised recommendation systems; for instance, "user clicked this item" is predictive of "user purchased this item" in offline data, but is clearly not available during online serving. Another source of privileged features is those that are too expensive to compute online but feasible to add offline. Privileged features distillation (PFD) refers to a natural idea: train a "teacher" model using all features (including privileged ones), and then use it to train a "student" model that does not use the privileged features. In this paper, we first study PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon's logs. We show that PFD outperforms several baselines (no distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets. Next, we analyze why and when PFD performs well via both empirical ablation studies and theoretical analysis for linear models. Both investigations uncover an interesting non-monotone behavior: as the predictive power of the privileged features increases, the performance of the resulting student model initially increases but then decreases. We show that the reason for the later decrease is that a highly predictive privileged teacher produces high-variance predictions, which lead to high-variance student estimates and inferior test performance.
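A minimal linear sketch of the PFD recipe described above (the data, ridge solver, and mixing weight are all illustrative assumptions, not the paper's models): the teacher is fit with the privileged feature included, and the student is fit on ordinary features against a blend of the true labels and the teacher's scores.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 5
X = rng.standard_normal((n, d))                # regular features
w_true = rng.standard_normal(d)
priv = X @ w_true + rng.standard_normal(n)     # privileged feature, e.g. "clicked"
y = priv + 0.2 * rng.standard_normal(n)        # label, e.g. "purchased"

def ridge(A, b, lam=1e-3):
    """Ridge regression: solve (A^T A + lam*I) w = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Teacher: trained with regular + privileged features.
X_teacher = np.hstack([X, priv[:, None]])
w_teacher = ridge(X_teacher, y)
teacher_scores = X_teacher @ w_teacher         # soft targets for distillation

# Student: regular features only, fit to a blend of labels and teacher scores.
mix = 0.5                                      # distillation weight (assumed)
w_student = ridge(X, mix * teacher_scores + (1 - mix) * y)

# The teacher fits the label far better than the student can, because at
# serving time the student must do without the highly predictive feature.
teacher_mse = np.mean((X_teacher @ w_teacher - y) ** 2)
student_mse = np.mean((X @ w_student - y) ** 2)
```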
Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks are notoriously complex to train, demanding careful weight initialization, lower learning rates, etc. These issues have been addressed by batch normalization (BN), which normalizes the inputs of activations to zero mean and unit standard deviation. Making this normalization part of the training process significantly speeds up the training of very deep networks. A new line of research is investigating the exact theoretical explanation behind the success of BN. Most of these theoretical insights attempt to explain the benefits of BN through its effects on optimization, weight-scale invariance, and regularization. Despite BN's undeniable success in accelerating generalization, there remains a gap in the analysis relating the effect of BN to the regularization parameter. This paper aims to bring out the data-dependent auto-tuning of the regularization parameter by BN, with analytical proofs. We pose BN as a constrained optimization over non-BN weights, through which we demonstrate its data-statistics-dependent auto-tuning of the regularization parameter. We also provide an analytical proof of its behavior under noisy input scenarios, which reveals the signal-versus-noise tuning of the regularization parameter. We further corroborate our claims with empirical results from experiments on the MNIST dataset.
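For reference, the basic BN forward pass the abstract builds on can be sketched as follows (a generic textbook sketch, not the paper's constrained-optimization formulation). One effect visible even in this toy version is invariance to the scale of the incoming pre-activations, one of the properties the theoretical works mentioned above study.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta_shift=0.0, eps=1e-5):
    """Normalize each feature over the batch to zero mean / unit variance,
    then apply a learnable affine transform (gamma, beta_shift)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta_shift

rng = np.random.default_rng(3)
x = rng.standard_normal((64, 8)) * 3.0 + 5.0   # pre-activations of one layer

out = batch_norm(x)
# Scaling the incoming weights scales x, but BN's output is (nearly) unchanged:
out_scaled = batch_norm(10.0 * x)
```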
We consider online reinforcement learning in mean-field games. In contrast to existing works, we alleviate the need for a mean-field oracle by developing an algorithm that estimates the mean field and the optimal policy using a single sample path of a generic agent. We call this Sandbox Learning, as it can be used as a warm start for any agent operating in a multi-agent non-cooperative setting. We adopt a two-timescale approach in which an online fixed-point recursion for the mean field operates on a slower timescale, in tandem with control policy updates for the generic agent on a faster timescale. Under a sufficient exploration condition, we provide finite-sample convergence guarantees in terms of the convergence of the mean field and the control policy to the mean-field equilibrium. The sample complexity of the Sandbox Learning algorithm is $\mathcal{O}(\epsilon^{-4})$. Finally, we empirically demonstrate the effectiveness of the Sandbox Learning algorithm in a traffic congestion game.
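The two-timescale idea can be caricatured in a toy congestion game. Everything below, including the game, the stepsize schedules, and the reward, is an illustrative assumption rather than the paper's algorithm: the mean-field estimate is updated from the single sample path with a faster-decaying (slower-timescale) stepsize, while the generic agent's Q-values are updated with a slower-decaying (faster-timescale) stepsize.

```python
import numpy as np

# Toy congestion game with two locations: the reward for moving to a
# location decreases with the estimated fraction of the population there.
rng = np.random.default_rng(4)
n_states = n_actions = 2          # action = location to move to next

mu = np.full(n_states, 0.5)       # mean-field estimate (slow timescale)
Q = np.zeros((n_states, n_actions))
s, gamma, eps = 0, 0.9, 0.2       # start state, discount, exploration rate

for k in range(1, 20001):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next = a                                 # deterministic transition
    r = 1.0 - mu[s_next]                       # congestion reward
    # Fast timescale: control (Q) update from the same sample path.
    Q[s, a] += (1.0 / k**0.6) * (r + gamma * Q[s_next].max() - Q[s, a])
    # Slow timescale: online fixed-point recursion for the mean field.
    e = np.eye(n_states)[s_next]
    mu += (1.0 / k**0.9) * (e - mu)
    s = s_next
```

In this symmetric toy game, the greedy policy keeps steering the agent toward the less crowded location, so the mean-field estimate settles near the balanced occupancy, mirroring the convergence to a mean-field equilibrium described above.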